# Distilled Large Models
## DeepSeek R1 Distill Qwen 14B Uncensored

- License: MIT
- Description: A distilled model built on the transformers library, developed by DeepSeek-AI by distilling DeepSeek-R1's reasoning capability into the Qwen-14B base model; a minimal loading sketch follows this card.
- Tags: Large Language Model, Transformers
- Publisher: thirdeyeai
- Downloads: 304 · Likes: 5
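Below is a minimal sketch of how a distilled model like this could be loaded for text generation with the Hugging Face transformers library. The repo id `thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored` is an assumption for illustration; substitute the model's actual id.

```python
# Minimal sketch: load a distilled causal LM with Hugging Face transformers.
# The repo id below is an assumption, not confirmed by the listing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread across available GPUs/CPU
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```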
## DeepSeek R1 Distill Qwen 32B 4bit

- Description: The MLX 4-bit quantized version of the DeepSeek-R1-Distill-Qwen-32B model, designed for efficient inference on Apple silicon devices; a usage sketch follows this card.
- Tags: Large Language Model, Transformers
- Publisher: mlx-community
- Downloads: 130.79k · Likes: 40
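For an MLX build like this one, a common route on Apple silicon is the `mlx-lm` package. The sketch below assumes the model is published as `mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit`; the exact repo id is an assumption.

```python
# Minimal sketch: run a 4-bit MLX model with the mlx-lm package on Apple silicon.
# The repo id below is an assumption, not confirmed by the listing.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit")  # assumed repo id

prompt = "Summarize what 4-bit quantization trades off."
response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)
```

Quantizing to 4 bits shrinks the 32B weights enough to fit in unified memory on many Macs, at some cost in output quality relative to the full-precision checkpoint.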